In the Terminator franchise, the machines weren't frightening because they learned from data and acted on it. They were frightening because they learned from data and then acted in unintended ways. These "unintended consequences" are one of the biggest ethical dilemmas facing the AI community today. Of course, such consequences can stem from a programming error, as depicted here. But with AI, the error is more likely to occur while training the model. Enter: Microsoft Tay.
Reference: https://www.reddit.com/r/funny/comments/4x28gw/this_is_why_code_reviews_are_a_good_thing/?st=j770kl8d&sh=9fb83792